Search Results
SysML 19: Zhihao Jia, Beyond Data and Model Parallelism for Deep Neural Networks
SysML 19: Zhihao Jia, Optimizing DNN Computation with Relaxed Graph Substitutions
Model vs Data Parallelism in Machine Learning
SysML 19: Anand Jayarajan, Priority-based Parameter Propagation for Distributed DNN Training
SysML 19: Jungwook Choi, Accurate and Efficient 2-bit Quantized Neural Networks
USENIX ATC '19 - NeuGraph: Parallel Deep Neural Network Computation on Large Graphs
SysML 19: Adam Lerer, Pytorch-BigGraph: A Large Scale Graph Embedding System
A Layer-Parallel Approach for Training Deep Neural Networks --- Eric Cyr
SysML 19: Sayed Hadi Hashemi, TicTac
Exploiting Parallelism in Large Scale Deep Learning Model Training: Chips to Systems to Algorithms
On the Acceleration of Deep Learning Model Parallelism With Staleness
AI/ML, Neural Networks & the future of analytics: Training Deep Neural Networks in parallel